1. (McCarthy and Hayes 1969) defines an %2epistemologically adequate%1 representation of information as one that can express the information actually available to a subject under given circumstances. Thus when we see a person, parts of him are occluded, and we use our memory of previous looks at him and our general knowledge of humans to finish off a "picture" of him that includes both two- and three-dimensional information. We must also consider %2metaphysically adequate%1 representations that can represent complete facts, ignoring the subject's ability to acquire the facts in given circumstances. Thus Laplace thought that the positions and velocities of the particles in the universe gave a metaphysically adequate representation. Metaphysically adequate representations are needed for scientific and other theories, but artificial intelligence and a full philosophical treatment of common sense experience also require epistemologically adequate representations. This paper might be summarized as contending that mental concepts are needed for an epistemologically adequate representation of facts about machines, especially future intelligent machines.
2. Work in artificial intelligence is still far from showing how to reach human-level intellectual performance. Our approach to the AI problem involves identifying the intellectual mechanisms required for problem solving and describing them precisely. Therefore we are at the end of the philosophical spectrum that requires everything to be formalized in mathematical logic. It is sometimes said that one studies philosophy in order to advance beyond one's untutored naive world-view, but unfortunately for artificial intelligence, no one has yet been able to give a description of even a naive world-view complete and precise enough to allow a knowledge-seeking program to be constructed in accordance with its tenets.
3. Present AI programs operate in limited domains, e.g. they play particular games, prove theorems in a particular logical system, or understand natural language sentences covering a particular subject matter and subject to other semantic restrictions. General intelligence will require general models of situations changing in time, actors with goals and strategies for achieving them, and knowledge about how information can be obtained.
4. Our opinion is that human intellectual structure is substantially determined by the intellectual problems humans face. Thus a Martian or a machine will need similar structures to solve similar problems. Dennett (1971) expresses similar views. On the other hand, the human motivational structure seems to have many accidental features that might not be found in Martians and that we would not be inclined to program into machines. This is not the place to present arguments for this viewpoint.
5. After several versions of this paper were completed, I came across (Boden 1972), which contains (among other things) an account of the psychology of William McDougall (1871-1938) and a discussion of a hypothetical program simulating it. In my opinion, a psychology like that of McDougall is a better candidate for simulation than many more recent psychological theories, because it comes closer to presenting a theory of the organism as a whole, proposing mechanisms for thoughts, goals, and emotions. I agree with the ways in which Boden modernizes McDougall, but even with her improvements, I think the theory is a long way from being simulatable, let alone correct. One major problem is that compound %2sentiments%1, to use McDougall's term, such as reverence are diagrammed in Boden's book as essentially Boolean combinations of their component emotions. In reality they must at least be complex patterns formed from their components and other entities. Thus we must have sentences as complex as %2reveres(person1, concept1, situation) ≡ ∃z.(isbelief(z) ∧ ascribes(person1, z, concept1, situation) ∧ etc.)%1. If I'm right about this, then every formulation of McDougall will have to be taken as merely suggestive of what terms should be in the definitions and not as actually giving them. Nevertheless, it seems to me that much can be learned from contemplating the simulation of a McDougall man. Axiomatizing the McDougall man should come before simulating it, however.
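To make the contrast concrete (the predicates below are illustrative inventions, not McDougall's actual analysis or Boden's), a purely Boolean diagram would amount to something like %2reveres(person1, concept1, situation) ≡ awes(person1, concept1, situation) ∧ admires(person1, concept1, situation)%1, on which reverence follows automatically whenever the component emotions happen to co-occur. The pattern view instead requires the components to be bound together by further entities, e.g. beliefs that are about the revered concept and that sustain the component emotions, as in the ∃z formula above; mere co-occurrence of the components does not suffice.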
6. Behavioral definitions are often favored in philosophy. A system is defined to have a certain quality if it behaves in a certain way or is %2disposed%1 to behave in a certain way. Their virtue is conservatism; they don't postulate internal states that are unobservable to present science and may remain unobservable. However, such definitions are awkward for mental qualities, because, as common sense suggests, a mental quality may not result in behavior, since another mental quality may prevent it; e.g. I may think you are thick-headed, but politeness may prevent my saying so. Particular difficulties can be overcome, but an impression of vagueness remains. The liking for behavioral definitions stems from caution, but I would interpret scientific experience as showing that boldness in postulating complex structures of unobserved entities - provided it is accompanied by a willingness to take back mistakes - is more likely to be rewarded by understanding of and control over nature than is positivistic timidity. It is particularly instructive to imagine a determined behaviorist trying to figure out an electronic computer. Trying to define each quality behaviorally would get him nowhere; only simultaneously postulating a complex structure including memory, arithmetic unit, control structure, and input-output would yield predictions that could be compared with experiment. There is a sense in which operational definitions are not taken seriously even by their proposers. Suppose someone gives an operational definition of length (e.g. involving a certain platinum bar), and a whole school of physicists and philosophers becomes quite attached to it. A few years later, someone else criticizes the definition as lacking some desirable property, proposes a change, and the change is accepted. This is normal, but if the original definition expressed what they really meant by the length, they would refuse to change, arguing that the new concept may have its uses, but it isn't what they mean by "length". This shows that the concept of "length" as a property of objects is more stable than any operational definition. Carnap has an interesting section in %2Meaning and Necessity%1 entitled "The Concept of Intension for a Robot" in which he makes a similar point saying, %2"It is clear that the method of structural analysis, if applicable, is more powerful than the behavioristic method, because it can supply a general answer, and, under favorable circumstances, even a complete answer to the question of the intension of a given predicate."%1 The clincher for AI, however, is an "argument from design". In order to produce desired behavior in a computer program, we build certain mental qualities into its structure. This doesn't lead to behavioral characterizations of the qualities, because the particular qualities are only one of many ways we might use to get the desired behavior, and anyway the desired behavior is not always realized.
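The point about the behaviorist and the computer can be illustrated with a minimal sketch (the toy instruction set below is invented for illustration and assumes nothing beyond the note itself): postulating an internal structure of memory, arithmetic unit, and control yields predictions about inputs never yet tried, which no finite behavioral table of past input-output pairs provides.

  # A toy machine with the internal structure the note mentions: memory,
  # an arithmetic unit (acc), a control structure (pc), and input-output.
  def run(program, memory, inp):
      acc, pc, out = 0, 0, []              # arithmetic unit, control, output
      while pc < len(program):
          op, arg = program[pc]
          if op == "LOAD":                 # memory -> arithmetic unit
              acc = memory[arg]
          elif op == "ADD":                # arithmetic on an input value
              acc = acc + inp[arg]
          elif op == "OUT":                # input-output
              out.append(acc)
          pc = pc + 1                      # control advances
      return out

  program = [("LOAD", 0), ("ADD", 0), ("OUT", 0)]
  print(run(program, [10], [5]))           # observed behavior: [15]
  print(run(program, [10], [7]))           # predicted before observation: [17]

The structural hypothesis may of course be wrong, but it is testable and correctable, which is just the boldness the note recommends over behavioral caution.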
7. Putnam (1970) also proposes what amount to second-order definitions for psychological properties.
8. Whether a system has beliefs and other mental qualities is not primarily a matter of the complexity of the system. Although cars are more complex than thermostats, it is hard to ascribe beliefs or goals to them, and the same is perhaps true of the basic hardware of a computer, i.e. the part of the computer that executes the program, considered apart from the program itself.
9. Our own ability to derive the laws of higher levels of organization from knowledge of lower level laws is also limited by universality. While the presently accepted laws of physics allow only one chemistry, the laws of physics and chemistry allow many biologies, and, because the neuron is a universal computing element, an arbitrary mental structure is allowed by basic neurophysiology. Therefore, to determine human mental structure, one must make psychological experiments, %2or%1 determine the actual anatomical structure of the brain and the information stored in it. One cannot determine the structure of the brain merely from the fact that the brain is capable of certain problem solving performance. In this respect, our position is similar to that of the Life robot.
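The universality claim can be illustrated by a standard construction (a sketch only; the McCulloch-Pitts style threshold unit below is an illustrative model, not a claim about real neurons): a single threshold unit computes NAND, and NAND is functionally complete, so networks of identical low-level elements can realize any Boolean function and hence very different higher-level organizations.

  # A threshold "neuron" computing NAND; since NAND is functionally
  # complete, such units suffice to build any Boolean function.
  def unit(w1, w2, bias, x1, x2):
      return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

  def nand(x1, x2):
      return unit(-1, -1, 1.5, x1, x2)   # fires unless both inputs are 1

  def not_(x):
      return nand(x, x)                  # NOT built from NAND

  def and_(x1, x2):
      return not_(nand(x1, x2))          # AND built from NAND

  for a in (0, 1):
      for b in (0, 1):
          print(a, b, "NAND:", nand(a, b), "AND:", and_(a, b))

Because the same elements can realize any function, knowing only that the elements are universal leaves the higher-level organization undetermined, which is the note's point about neurophysiology.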